perm filename CONTRO[S86,JMC]1 blob
sn#814415 filedate 1986-04-04 generic text, type C, neo UTF8
COMMENT ⊗ VALID 00009 PAGES
C REC PAGE DESCRIPTION
C00001 00001
C00002 00002 contro[s86,jmc] The great Spring 86 AI controversy
C00003 00003 The invitation to the dance.
C00007 00004 The cast of characters.
C00009 00005 The play begins.
C00021 00006 Dreyfus:
C00040 00007 Winograd:
C00051 00008 Searle:
C00074 00009 vijay.ernie@berkeley.edu
C00077 ENDMK
C⊗;
contro[s86,jmc] The great Spring 86 AI controversy
The invitation to the dance.
∂07-Feb-86 1529 vijay@ernie.berkeley.edu
Received: from ERNIE.BERKELEY.EDU by SU-AI.ARPA with TCP; 7 Feb 86 15:29:14 PST
Received: by ernie.berkeley.edu (5.44/1.8)
id AA11470; Fri, 7 Feb 86 15:27:39 PST
Date: Fri, 7 Feb 86 15:27:39 PST
From: vijay@ernie.berkeley.edu (Vijay Ramamoorthy)
Message-Id: <8602072327.AA11470@ernie.berkeley.edu>
To: jmc@su-ai.arpa
Hello Dr. McCarthy,
Rajeev Aggarwal, John Searle, and I are doing a study which
is quite similar to the survey published by Daniel Bobrow in
the AI Journal this year. Dr. Bobrow will be helping us to
make a publishable version of this study for the AI Journal.
Basically, the whole study can be described/outlined in three stages.
In the first, we have three participants: Hubert/Stuart Dreyfus,
John Searle, and David Rumelhart. They have agreed to provide
approximately two-page specific criticisms of traditional AI.
(Terry Winograd may also be participating, but this is not certain yet.)
In the second stage, four computer scientists actively doing
work in the field will be providing responses to any parts
of the criticisms that they feel need to be refuted, based
on their work, other AI work, or their own philosophies. We
would very much like you to be one of the four participants
in this stage.
All the participants sincerely believe that your presence and views
are very important to such a discussion - both for their own benefit
and for that of the various readerships we hope will see versions
of this discussion in print.
In the third and last stage, we intend to get one last brief
response/comments from the critical side and then a final
statement from the AI researchers.
The exchange of communications will be organized so that
each participant has a reasonable amount of time to respond
to the other participants, one at a time.
If it is okay with you, we would like to conduct all communication
over the network, since this will make the entire study go more
rapidly. We hope you will be able to participate; please let
us know your decision soon. We believe this will be
quite an interesting discussion!
Sincerely,
Vijay Ramamoorthy
The cast of characters.
∂13-Feb-86 1501 vijay@ernie.berkeley.edu
Received: from ERNIE.BERKELEY.EDU by SU-AI.ARPA with TCP; 13 Feb 86 15:00:49 PST
Received: by ernie.berkeley.edu (5.44/1.8)
id AA03983; Thu, 13 Feb 86 14:59:05 PST
Date: Thu, 13 Feb 86 14:59:05 PST
From: vijay@ernie.berkeley.edu (Vijay Ramamoorthy)
Message-Id: <8602132259.AA03983@ernie.berkeley.edu>
To: jmc@su-ai.arpa
Hello Dr. McCarthy,
Thank you for responding so promptly. The complete list of
participants is: John Searle, Hubert and Stuart Dreyfus, David
Rumelhart, Seymour Papert, Joseph Weizenbaum, Eugene Charniak,
Douglas Hofstadter (in a "middle" position), Terry Winograd,
and yourself.
Next week we will be sending out complete information on
the discussion.
Sincerely,
Vijay Ramamoorthy
The play begins.
∂13-Mar-86 1941 vijay@ernie.berkeley.edu
Received: from ERNIE.BERKELEY.EDU by SU-AI.ARPA with TCP; 13 Mar 86 19:41:18 PST
Received: by ernie.berkeley.edu (5.45/1.9)
id AA13465; Thu, 13 Mar 86 19:42:23 PST
Date: Thu, 13 Mar 86 19:42:23 PST
From: vijay@ernie.berkeley.edu (Vijay Ramamoorthy)
Message-Id: <8603140342.AA13465@ernie.berkeley.edu>
To: jmc@su-ai.arpa
***************************************************************
FINALLY --- IT'S HERE!!!
OUR AI DISCUSSION WILL BEGIN NEXT WEEK!!
***************************************************************
Thank you again for your participation; we hope that everyone
will benefit from the opportunity of putting forth their
ideas and receiving responses from such a diverse and
distinguished group of people.

Following is some general information about the written
discussion this study entails:
PURPOSES: We would like this discussion to be a free expression
of ideas about artificial intelligence. It will start with a
series of critiques of the traditional approaches that many AI
researchers have taken and are currently taking. This will
probably be enough to provoke many responses against the
criticisms, and then responses to those responses. But it
needn't always be so; agreement is perhaps one of the best
things to come out of any discussion, and we hope that it will
emerge in some form from this one. Participants will benefit
by sharpening their positions and ideologies, and since this is
a written discussion, everyone will have the chance to get at
the heart of the beliefs of others - both by having time to
think about certain ideas, and by being able to formulate
responses without having to publish them each time.

We also hope that this "meeting of the minds" will be a testing
ground for new ideas and hypotheses to gain feedback from
others. There really isn't one sharp line that divides
everyone, for almost no one agrees completely with anybody else
anyway.
FRAMEWORK: There are 3 general stages to this discussion. The
first two will be somewhat formal, with the third being a general
"anything goes" informal exchange. They are outlined as follows:
Stage 1: This stage will consist of some criticisms of
         current/traditional AI research; this is basically
         to start the discussion; they will be given from group
         one of the participants (as we have divided them)
         to the other; each of the criticisms will be
         approximately 2 pages.

Stage 2: This stage will be the first response to these
         criticisms; each participant from group 2 will have
         the opportunity to respond to (support or criticize)
         anything in each of the critical papers - based on
         their research, philosophies, or beliefs. These
         responses will then be passed on to the group 1
         participants.

Stage 3: This last stage will partly build on the first two,
         and be supplemented by whatever else comes up. Here
         there will be rapid exchanges amongst the various
         participants. Everyone will be able to monitor the
         discussion as it progresses.
PARTICIPANTS: This grouping really only applies to the first
2 stages; in the last, it is not important.

     Group 1                    Group 2
     John Searle                John McCarthy
     Stuart/Hubert Dreyfus      Daniel Bobrow
     Terry Winograd             Seymour Papert
     Joseph Weizenbaum          Eugene Charniak

     In the middle:
     Douglas Hofstadter
     David Rumelhart
The division was not meant to be a major classification of any
type. It was arrived at based on past stances toward traditional
information-processing oriented research. Its only purpose is
to provide part of a knowledge base/foundation for Stage 3.

One note about "In the middle": for purposes of the first and
second stages, we decided to have Douglas Hofstadter and David
Rumelhart in a position where they will converse with both sides.
TIMETABLE: At the outset, we told everyone that there would be
"a reasonable amount of time to respond." This really applies
to the first two stages, where we would like to keep it to 2
weeks for the production of the first stage, and 2 weeks later
for the responses in the second stage. The third stage will
probably last several weeks, but this is generally open.
The time we have in mind for obtaining the criticisms of stage 1
is... FRIDAY, MARCH 21. At that time, we will pass all of the
papers on to all the group 2 participants. Two weeks from then,
we request all the group 2 responses to be in by FRIDAY, APRIL 4.
These responses will be forwarded to the group 1 members, and the
informal (stage 3) discussion will then begin (probably the most
interesting part). At that point, responses to specific people
will be forwarded immediately to the individuals involved. At
the end of each week, a transcript of the entire week's discus-
sion will be distributed to everyone.
COMMUNICATIONS: The entire discussion, as we have mentioned, will
take place by electronic mail -- the fastest form of written
communication of this sort available to everyone. The account
that will be dedicated to handling all the communications is
the following:
vijay@ernie.berkeley.edu
Once we start, all information will be processed immediately
after it is received. All messages received will be
acknowledged immediately, and we hope that everyone will do the
same. E-mail is reliable, but not "that" reliable.
PUBLICATION: Daniel Bobrow has been kind enough to offer his
help in collating the multitude of responses for publication in
the AI Journal. Furthermore, there will be a neutral
introduction and analysis to the entire discussion.

However, we will also be offering various editions of this
discussion to various prominent national science publications.
Our philosophy here is that, given the quality of articles on
AI, it is clearly better that the current ideas driving AI
research be discussed by those directly involved with it than
left to journalists to interpret.

Furthermore, it almost goes without saying that everyone
participating will receive a final copy of the sum total of all
communications among the various participants in this
discussion.
Please forward any further questions or problems to this
account: vijay@ernie.berkeley.edu
Sincerely,
Vijay Ramamoorthy, U.C. Berkeley (Computer Science)
Rajeev Aggarwal, Bell Laboratories
John Searle, Dept of Philosophy, U.C. Berkeley
Daniel Bobrow, Xerox
(Project Organizers)
P.S. Remember, please acknowledge receipt of this message
through the account to which you would like us to send all your
responses/comments/information.
Dreyfus:
∂01-Apr-86 1320 vijay@ernie.berkeley.edu AI DISC: DREYFUS
Received: from ERNIE.BERKELEY.EDU by SU-AI.ARPA with TCP; 1 Apr 86 13:20:32 PST
Received: by ernie.berkeley.edu (5.45/1.9)
id AA27558; Tue, 1 Apr 86 13:20:57 PST
Date: Tue, 1 Apr 86 13:20:57 PST
From: vijay@ernie.berkeley.edu (Vijay Ramamoorthy)
Message-Id: <8604012120.AA27558@ernie.berkeley.edu>
To: jmc@su-ai.arpa
Subject: AI DISC: DREYFUS
Hello Dr. McCarthy,
You may have seen a version of the following from Dreyfus
before; however, both Stuart and Hubert Dreyfus say that, as
a starting point, it articulates their ideas most clearly.
------------------------------------------------------------------
CONVENTIONAL AI: A DEGENERATING RESEARCH PROGRAM
Looking back over 30 years, the field of conventional
rule-based AI appears more and more to be a perfect example
of what Imre Lakatos has called a degenerating research pro-
gram.[1] AI began auspiciously with Newell and Simon's work
at RAND. In retrospect, we see we failed to appreciate the
importance of this early work. Newell and Simon proved that
computers could do more than calculations. They demonstrated
that the symbols computers manipulate could stand for any-
thing, including features of the real world, and programs
could be used as rules for relating these features, so that
computers acting as logic machines could be used to simulate
certain important aspects of intelligence. Thus the
information-processing model of the mind was born. By 1970
AI, using symbolic representations, had turned into a flour-
ishing research program. Marvin Minsky, head of the M.I.T.
program, predicted that "within a generation the problem of
creating `artificial intelligence' will be substantially
solved."[2]
_________________________
[1] Imre Lakatos, Philosophical Papers, ed. John Worrall,
Cambridge University Press, 1978.
[2] Marvin Minsky, Computation: Finite and Infinite
Machines, Prentice-Hall, 1967, p. 2.

Then, rather suddenly, the field ran into unexpected
difficulties. The trouble started, as far as we can tell,
with the failure of attempts to program children's story
understanding. It turned out to be much harder than one
expected to formulate the required theory of common sense.
It was not, as Minsky had hoped, just a question of catalo-
guing a few hundred thousand facts. The common sense
knowledge problem became the center of concern. Minsky's
mood changed completely in the course of fifteen years. He
told a reporter: "the AI problem is one of the hardest sci-
ence has ever undertaken."[3]
Related problems were also noted, although not often seen as
related. Cognitivists discovered the importance of images and
prototypes in human understanding, and logic machines turned
out to be very poor at dealing with either
of them. Gradually most researchers have become convinced
that human beings form images and compare them by means of
holistic processes quite different from the logical opera-
tions computers perform on descriptions.[4] Some AI workers
hope for help from parallel processors, machines that can do
many things at once and hence can make millions of infer-
ences per second, but if human image processing operates on
holistic representations that are not descriptions and
relates these representations in other than rule-like ways,
this appeal to parallel processing misses the point. The
point is that human beings are able to form and compare
their images in a way that cannot be captured by any number
of procedures that operate on symbolic descriptions.

_________________________
[3] Gina Kolata, "How Can Computers Get Common Sense?",
Science, Vol. 217, 24 September 1982, p. 1237.
[4] For an account of the experiments which show that
human beings can actually rotate, scan, and otherwise use
images, and the unsuccessful attempts to understand these
capacities in terms of programs which use features and
rules, see Imagery, Ned Block, ed., M.I.T. Press/Bradford
Books, 1981. Also Ned Block, "Mental Pictures and Cognitive
Science," The Philosophical Review, Oct. 1983, pp. 499-541.
Another human capacity which computers functioning as
analytic engines cannot copy is the ability to recognize the
similarity between whole images. Recognizing two patterns as
similar, which seems to be a direct process for human
beings, is for a logic machine a complicated process of
first defining each pattern in terms of objective features
and then determining whether, by some objective criterion,
the set of features defining one pattern matches the features
defining the other pattern.
As we see it, all AI's problems are versions of one
basic problem. Current AI is based on the idea, which has
been around in philosophy since Descartes, that all
understanding consists in forming and using appropriate
representations. In conventional AI these have been assumed to be
symbolic descriptions. So common sense understanding has to
be understood as some vast body of propositions, beliefs,
rules, facts and procedures. AI's failure to come up with
the appropriate symbolic descriptions is called the common
sense knowledge problem. As thus formulated this problem
has so far turned out to be insoluble, and we predict it
will never be solved.
What hides this impasse is the conviction that the
common sense knowledge problem must be solvable since human
beings have obviously solved it. But human beings may not
normally use common sense knowledge at all. What common
sense understanding amounts to might well be everyday know-
how. By know-how we do not mean procedural rules, but know-
ing what to do in a vast number of special cases. For exam-
ple, common sense physics has turned out to be extremely
hard to spell out in a set of facts and rules. When one
tries, one either requires more common sense to understand
the facts and rules one finds or else one produces formulas
of such complexity that it seems highly unlikely they are in
a child's mind.
Theoretical physics also requires background skills
which may not be formalizable, but the domain itself can be
described by abstract laws that make no reference to
specific cases. AI researchers conclude that common sense
physics too must be expressible as a set of abstract princi-
ples. But it just may be that the problem of finding a
theory of common sense physics is insoluble. By playing
almost endlessly with all sorts of liquids and solids for
several years the child may simply have built up a repertory
of prototypical cases of solids, liquids, etc. and typical
skilled response to their typical behavior in typical cir-
cumstances. There may be no theory of common sense physics
more simple than a list of all such typical cases and even
such a list is useless without a similarity-recognition
ability. If this is indeed the case, and only further
research will give us an answer, we could understand the
initial success and eventual failure of AI. It would seem
that AI techniques should work in isolated domains but fail
in areas such as natural language understanding, speech
recognition, story understanding, and learning where the
structure of the problem mirrors the structure of our every-
day physical and social world.
In 1979 we predicted stagnation for AI, but also
predicted the success of programs called expert systems
which attempted to produce intelligent behavior in domains
such as medical diagnosis and spectrograph analysis which
are completely cut off from everyday common sense. Now we
think we were uncharacteristically over-optimistic concern-
ing the future of intelligent logic machines. It has turned
out that, except in certain structured domains where what
constitutes the relevant facts and how these facts are
changed by decisions is known objectively, no expert system
based on rules extracted by questioning experts does as well
as the experts themselves, even though the computer is pro-
cessing with incredible speed and unerring accuracy what are
supposed to be the experts' rules.
In our just published book Mind Over Machine we attempt
to explain this surprising development. We argue that
beginners in a domain are given principles to follow, but
most domains in which human beings acquire skills and
achieve expertise are, like everyday physics, domains which
do not lend themselves to being understood at an expert
level in terms of principles.[5] Therefore experts, as even
Edward Feigenbaum has noted, are never satisfied with gen-
eral principles but prefer to think of their field of exper-
tise as a huge set of special cases.[6] No wonder expert
systems based on principles abstracted from experts do not,
in unstructured domains, capture those experts' expertise
and so never do as well as the experts themselves.
We still think, as we did in 1965, that someday comput-
ers may be intelligent just as one day the alchemists' dream
of transmuting lead into gold came true. AI may be
achieved, however, only after researchers give up the idea
of finding a local symbolic representation of high-order
macrostructural features describing the world and turn
instead to some sort of microstructural distributed, holis-
tic representation that is directly amenable to association,
generalization and completion. If this is, indeed, the
direction AI should go, it will be aided by the massively
parallel machines on the horizon - not because parallel
machines can make millions of inferences per second, but
because faster, more parallel architecture can better imple-
ment the kind of neurally inspired processing that does not
use macrostructural representations of rules and features
at all.[7]

_________________________
[5] Hubert L. Dreyfus and Stuart E. Dreyfus, Mind over
Machine, Free Press/Macmillan, 1986.
[6] Edward A. Feigenbaum and Pamela McCorduck, The Fifth
Generation: Artificial Intelligence and Japan's Computer
Challenge to the World, Addison-Wesley Publishing Company,
1983, p. 82.
[7] See for example D. Rumelhart and J. McClelland,
Parallel Distributed Processing: Explorations in the
Microstructure of Cognition, MIT Press/Bradford Books, 1986.

Hubert L. Dreyfus and Stuart E. Dreyfus
University of California, Berkeley
Winograd:
∂01-Apr-86 1325 vijay@ernie.berkeley.edu AI DISC: Winograd Position
Received: from ERNIE.BERKELEY.EDU by SU-AI.ARPA with TCP; 1 Apr 86 13:25:24 PST
Received: by ernie.berkeley.edu (5.45/1.9)
id AA27764; Tue, 1 Apr 86 13:25:53 PST
Date: Tue, 1 Apr 86 13:25:53 PST
From: vijay@ernie.berkeley.edu (Vijay Ramamoorthy)
Message-Id: <8604012125.AA27764@ernie.berkeley.edu>
To: jmc@su-ai.arpa
Subject: AI DISC: Winograd Position
The best thing to do in a short position paper is to put forth
some clear and probably controversial assertions, without
giving elaborate motivations and justifications or contrasting
them with other ways of understanding. These fuller
discussions appear at length in my recent book with Fernando
Flores, Understanding Computers and Cognition.
1. In characterizing AI, there are two very different starting points.
We can take it as the general enterprise of developing intelligent
artifacts (by any physical means whatsoever), or as the expression of a
coherent methodology and theory.
2. To the extent that AI means "anything anyone might invent that shows
intelligence," discussion belongs in the realm of science fiction, since
there is little concrete to be said. To the extent we are talking about
what people have really done in AI, there is a strong coherent ideology,
variously labelled the "computational paradigm," "cognitive paradigm,"
"physical symbol system hypothesis," etc. Most of the existing AI
enterprise operates within it (including, though to a somewhat lesser
extent, the current work on connectionism).
3. The cognitive symbol-processing approach will have useful
applications, but these will not be as widespread or significant as
proponents claim. In general, those tasks that are closer to
"puzzle-solving" will be best covered, and those closer to "common
sense" and "ordinary understanding" will remain unmechanized. This
applies not only to existing technology, but to any of the foreseeable
improvements following in the general scientific direction that is being
pursued (including "massively" parallel machines, nonmonotonic
reasoning, etc., etc.).
4. I am not so concerned with the danger that attempts to fully
duplicate human intelligence will fail (as long as people don't bank
too heavily on optimistic predictions), but rather that the enterprise has
an effect of redefining intelligence---of shaping human understanding of
what is to count as "intelligent." In particular, AI is based on a
"rationalistic" account of human thought and language, which focusses on
systematic reasoning based on symbolic representations within an
explicitly articulated domain of features. This approach has important
uses, but systematically undervalues other aspects of intelligent human
action, both in the individual and within a tradition. Emphasis on
rationalism is not new to AI, having a long history in Western thought
(beginning with Plato, expressed more thoroughly by Descartes).
Computers (and AI in particular) give it a powerful operational form.
5. A healthy skepticism about AI (and the rationalistic orientation in
general) is needed as a guide for design of computer systems that make
sense. We are easily seduced by the image of the "thinking machine"
into claiming that the problems of designing and working with computer
technology will be solved when the machines get smart enough. The Fifth
Generation hoopla (both the Japanese original report and later books and
responses) is an egregious example of this fallacy. The phenomena of
"computerization" (in its pejorative sense) derive from the
reorganization of social systems to fit the properties of particular
computer implementations. It will not be prevented by having "smart"
machines, and in fact is accelerated by advocating the use of computers
in less structured areas of human life and society.
6. My major interest lies in research (both theoretical and applied)
that will support the development of technology to provide the
advantages of using computers while anticipating and avoiding negative
effects on people's work and lives. The rationalistic tradition does
not provide a sufficient basis for this design, since it takes as its
starting point an impoverished account of what people do. A new
starting point will come from an understanding of the phenomenology of
human communication and use of technology. We can draw on the
philosophical tradition of phenomenology, and its insights can be given
concrete operational meaning in the context of design.
7. It is often claimed that concerns of "social impact" should be left
to the political process, or perhaps to engineers who are directly
developing products, but should be ignored in pursuing "pure science."
These (often self-serving) claims are based on a rationalistic (and
narrow) understanding of science as a human enterprise. They might be
true for some idealized scientist living self-sufficiently and
incommunicado on an isolated island, but are irrelevant to the real
world. The continuing enterprise of any science depends on a public
consensus that supports the allocation of resources to it. This
consensus is maintained by a process of publication and "education" in
which the ideology of the science is promulgated and justified. As
members of the "AI community" we all participate in this, through
writing, talking, and teaching.
8. AI scientists and engineers have a responsibility to take their work
seriously---to recognize that both their inventions and their words have
a serious effect and to consider the effects consciously. The issue
isn't censorship, but positive action. It is useless to try to label
work that "shouldn't be done," but instead we can use our knowledge and
status to advance the things that "should be done," rather than just
those that "can be done." I anticipate a gradual shift of effort and
emphasis within the field as we go beyond the early science-fiction
dreams that motivated the field, and look at directions for new research
(including theoretical research) that better deals with the realities of
human society. In particular, computers (using AI techniques) will be
understood in terms of the complex and productive ways in which they can
serve as a medium for human-to-human communication, rather than being
personified as surrogate people.
-TERRY WINOGRAD
Searle:
Received: from ERNIE.BERKELEY.EDU by SU-AI.ARPA with TCP; 3 Apr 86 15:14:58 PST
Received: by ernie.berkeley.edu (5.45/1.9)
id AA28407; Thu, 3 Apr 86 15:15:33 PST
Date: Thu, 3 Apr 86 15:15:33 PST
From: vijay@ernie.berkeley.edu (Vijay Ramamoorthy)
Message-Id: <8604032315.AA28407@ernie.berkeley.edu>
To: jmc@su-ai.arpa
TURING THE CHINESE ROOM
John R. Searle
Since various garbled versions of my Chinese room argu-
ment continue to be current in the CS-AI community, I intend
first to set the record straight. Then I intend to review
the current state of the argument concerning strong AI.
Among other things, I am accused of holding the prepos-
terous view that somehow in principle, as a matter of logic,
only carbon-based or perhaps only neuronal-based substances
could have the sorts of thoughts and feelings that humans
and other animals have. I have repeatedly and explicitly
denounced this view. Indeed, I use a variation of the
Chinese room argument against it: simply imagine right now
that your head is opened up and inside is found not neurons
but something else, say, silicon chips. There are no purely
logical constraints that exclude any particular type of sub-
stance in advance.
My actual argument is very simple and can be set out in
a very few steps:
Definition 1. Strong AI is defined as the view that
the appropriately programmed digital computer with the right
inputs and outputs would thereby have a mind in exactly the
same sense that human beings have minds.
It is this view which I set out to refute.
Proposition 1. Programs are purely formal (i.e. syn-
tactical).
I take it this proposition needs no explanation for the
readers of this journal.
Proposition 2. Syntax is neither equivalent to nor
sufficient by itself for semantics.
I take it Proposition 2 is a conceptual or logical truth.
The point of the parable of the Chinese room is simply to
remind us of the truth of this rather obvious point: the man
in the room has all the syntax we can give him, but he does
not thereby acquire the relevant semantics. He still does
not understand Chinese.
It is worth pointing out that the distinction between
syntax and semantics is an absolutely foundational principle
behind modern logic, linguistics, and mathematics.
Proposition 3. Minds have mental contents (i.e. seman-
tic contents).
Now from these three propositions, it follows simply
that strong AI as defined is false. Specifically:
Conclusion 1: Having a program -- any program by
itself -- is neither sufficient for nor equivalent to having
a mind.
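Since Searle presents this as axioms and a derivation, the bare
propositional skeleton can be checked mechanically. The
following Lean rendering is an editorial sketch, not Searle's
own text: the premises appear as hypotheses rather than
asserted truths, the predicate names are placeholders, and the
`room' hypothesis packages Propositions 1 and 2 under the
reading that makes the step valid.

    -- An editorial sketch of Searle's first derivation.
    -- Nothing here argues for the premises; the theorem only
    -- checks that Conclusion 1 follows from them.
    theorem searle_c1 {System : Type}
        (Prog Syn Sem Mind : System → Prop)
        -- P1 and P2 read together: some system could run the
        -- program, with all the syntax that gives it, and
        -- still lack semantics (the man in the room).
        (room : ∃ s, Prog s ∧ Syn s ∧ ¬ Sem s)
        -- P3: minds have mental (i.e. semantic) contents.
        (p3 : ∀ s, Mind s → Sem s) :
        -- C1: having a program is not sufficient for a mind.
        ¬ (∀ s, Prog s → Mind s) :=
      fun h =>
        match room with
        | ⟨s, hprog, _, hnosem⟩ => hnosem (p3 s (h s hprog))

Whether the premises are true, and whether this reading of
"purely formal" is the right one, is of course the
philosophical question; the formalization settles only the
validity of the step.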
Anyone who wishes to challenge my argument is going to
have to show at least that one of the three "axioms" is
false. It is very hard to see how anybody in the AI commun-
ity would want to challenge any of them. In particular, the
idea that the program is purely formal and the computer is a
formal symbol manipulating device is hardly something that I
need to teach workers in AI.
Once you appreciate the structure of the argument it is
easy to see that the standard replies to it in the strong AI
camp are simply irrelevant because they do not address them-
selves to the actual argument. Thus, for example, the "sys-
tems reply" (according to which `the room,' i.e. the whole
system, understands Chinese even though the man in the room,
i.e. the CPU, does not understand) simply misses the point.
The system has no way of getting from the syntax to the
semantics any more than the man does. The systems reply
cannot evade the sheer inexorability of the syntax/semantics
distinction. Which axioms does it wish to challenge? And
what grounds are being given for the challenge? The "robot
reply" (according to which if we put the system inside a
robot capable of causal interactions with the rest of the
world it would thereby acquire a semantics) simply concedes
that strong AI is false. It admits that syntax would not
be sufficient for semantics but insists that syntax plus
causation would produce a semantics. This involves a
separate mistake that I will come back to, but right now I
want to emphasize that none of the defenders of strong AI
-- a rather large group by the way -- has even begun to make
an effective challenge to any of the three principles I have
enunciated.
That is the argument against strong AI. It is that
simple. Anyone interested only in knowing if strong AI is
false can stop reading right here. But now this simple
argument gives rise to a whole lot of other issues.
Some of them are a bit trickier, but I will keep the argu-
ment as simple as possible. As before, the "axioms" must be
obviously true and the derivations must be transparently
valid.
If creating a program is not sufficient for creating a
mind, what would be sufficient? What is the difference
between the relation that mental states have to brains and
the relation that programs have to their hardware implemen-
tations? What are the relations between mental processes
and brain processes anyhow? Well, obviously I am not going
to answer all of these questions in this short paper, but we
can learn a surprising amount by just reminding ourselves of
the logical consequences of what we know already.
One thing we know is this: quite specific neurophysio-
logical and neurobiological processes in the brain *cause*
those states, events, and processes that we think of as
specifically mental, both in humans and in the higher
animals. Of course the brain, like a computer, or for that
matter, like anything else, has a formal level (indeed many
formal levels) of description. But the *causal* powers of the
brain by which it causes mental states have to do with
specific neurobiological features, specific electrochemical
properties of neurons, synapses, synaptic clefts, neuro-
transmitters, boutons, modules, and all the rest of it. We
can summarize this brute empirical fact about how nature
works as:
Proposition 4. Brains cause minds.
Let us think about this fact for a moment. The fact that a
system has mental states and that they are caused by neuro-
physiological processes has to be clearly distinguished from
the fact that a system that has mental states will charac-
teristically behave in certain ways. For a system might
have the mental states and still not behave appropriately
(if, say, the system is human and the motor nervous system
is interfered with in some way) and it might behave in a way
appropriate to having mental states without having any men-
tal states (if, say, a machine is set up to simulate the
input-output functions of the human system without having
the appropriate mental states -- in a familiar example, the
system might emit the right answers to the right questions
in Chinese and still not understand a word of Chinese.) So
the claim that Brains Cause Minds is not to be confused with
the claim that Minds Cause Behavior. Both are true. But
the claim that brains cause minds is a claim about the "bot-
tom up" powers of the brain. It is a summary of the claim
that lower level neurophysiological processes cause, e.g.,
thoughts and feelings. So far it says nothing at all about
external behavior. Just to keep the distinction straight,
let us write this separate proposition as:
Proposition 5. Minds cause behavior.
Now with P. 5, unlike P. 4, we are not talking about bottom
up forms of causation. We are simply summarizing such facts
as that my pain causes me to say "Ouch," my thirst causes me
to drink beer, etc.
From P. 4 and P. 5 by transitivity of causation, we
can infer
Conclusion 2. Brains cause behavior.
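The transitivity step that yields Conclusion 2 can be written
out the same way; again an editorial sketch with placeholder
names, taking transitivity of causation itself as an explicit
hypothesis, since that is what the inference leans on.

    -- Editorial sketch: C2 from P4, P5, and transitivity.
    theorem searle_c2 {Phen : Type}
        (Causes : Phen → Phen → Prop)
        -- transitivity of causation, as Searle invokes it
        (trans : ∀ a b c : Phen,
          Causes a b → Causes b c → Causes a c)
        (brain mind behavior : Phen)
        (p4 : Causes brain mind)      -- P4: brains cause minds
        (p5 : Causes mind behavior)   -- P5: minds cause behavior
        : Causes brain behavior :=    -- C2: brains cause behavior
      trans brain mind behavior p4 p5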
But now with the clear distinction between P. 4 & P. 5
and the observation that the input-output relations of human
beings are mediated by mental states, we can see the real
power and implications of P. 4. The causal powers of the
brain consist not merely in the fact stated by C. 2, that
brains cause it to be the case that in response to certain
stimuli a person will emit certain outputs (e.g. someone
pinches me and I say "Ouch"). The claim is rather that
specific biochemical features of the brain by bottom-up
forms of causation cause all of our mental phenomena includ-
ing those mental phenomena that mediate input-output rela-
tions, i.e. those mental phenomena that cause behavior.
(E.g., when someone pinches me and I say "Ouch" it is
because I feel a pain, and the sensation of pain is caused
by neuron firings in the thalamus and the somatosensory
cortex.)
We have then a clear distinction between the causal
powers of the brain to produce mental states and the causal
powers of the brain (together with the rest of the nervous
system) to produce input-output relations. I certainly have
not demonstrated that P. 4 is true, but I take it that its
truth is demonstrated by the past century of neurobiology.
And in any case, does anyone really doubt it? Does anyone
really doubt that all of our mental states are caused by
low level (e.g. neuronal) processes in the brain? Now from
P. 4, it follows trivially that
Conclusion 3. Any system capable of causing minds
would have to have causal powers equivalent to the bottom-up
causal powers of brains.
This is a trivial consequence of P. 4. Conclusion 3 does
not tell us anything about how those causal powers have to
be realized. As far as logical possibility is concerned
they could be realized, as I have pointed out on numerous
occasions, in green slime, silicon chips, vacuum tubes, or
for that matter, old beer cans. I have also claimed that,
as a matter of empirical fact, the probability that beer
cans, silicon chips, etc. have the same causal powers as
neurons is, roughly speaking, zero. The chances that the
chemical properties of silicon chips will be equal in their
bottom-up causal powers to the properties of neurons are
about as great as the chances that silicon chips will be
able to perform photosynthesis, lactation, digestion, or any
other specifically biological process. However, as I have
said repeatedly, that is an empirical claim on my part, not
something to be established by philosophical argument alone.
But, once again, does anyone in AI really question it? Is
there someone in AI so totally innocent of biological
knowledge that he thinks that the specific biochemical powers
of human nervous systems can be duplicated in silicon chips
(transistors, vacuum tubes -- you name it)? Frankly, I
doubt it. I think the underlying mistake comes not from
ignorance but from confusion: the confusion is to suppose
that the same input-output function implies the presence of
the same bottom up causation. This view is enshrined in the
Turing test, but a moment's reflection is sufficient to show
that it is false. For example, at an appropriate level of
description an electrical engine can have the same input-
output function as a gasoline engine -- it can be designed
to respond in the same way to the same commands -- but it
works on completely different internal principles. Analo-
gously a system might pass the Turing test perfectly, it
might have the same information processing input-output
functions as those of a human being and still not have any
inner psychology whatever. It might be a total zombie.
We can now see what was wrong with the robot reply. It
had the wrong level of causation. The presence of input-
output causation that would enable a robot to function in
the world *implies nothing whatever* about the presence of
bottom-up causation that would produce mental states.
Now from these elementary considerations, we can derive
two further conclusions.
Conclusion 4. The way that brains cause minds cannot
be solely in virtue of instantiating a computer program.
This conclusion follows from Proposition 4 and Conclusion 1,
that is, from the fact that brains do cause minds, and the
fact that programs are not enough, we can derive Conclusion
4.
Conclusion 5. Any artifact that we design, any system
that is created artificially for the purpose of creating
minds, could not do it solely in virtue of instantiating a
computer program, but would have to have causal powers
equivalent to the bottom-up causal powers of the brain.
This conclusion follows from Conclusions 1 and 3.
Now in all of the vast amount of literature that has
grown up around the Chinese room argument, I cannot see that
any of my critics have ever faced up to the sheer logical
structure of the argument. Which of its axioms do they wish
to deny? Which steps in the derivation do they wish to
challenge? What they have done rather, like Hofstadter and
Dennett, is persistently misquote me or attribute views to
me which are not only views I do not hold, but views which
I have explicitly denied. I am prepared to keep winning
this same argument over and over again, because its steps
are so simple and obvious, and its "assumptions" can hardly
be challenged by anybody who accepts the modern conception
of computation and indeed our modern scientific world view.
It can no longer be doubted that the classical concep-
tion of AI, the view that I have called strong AI, is pretty
much obviously false and rests on very simple mistakes. The
question then arises, if strong AI is false, what ought AI to
be doing? What is a reasonable research project for weak
AI? That is a topic for another paper.
-------
-John R. Searle
vijay.ernie@berkeley.edu
responses
I found the three papers disappointingly insubstantial.
I have written out responses to all of them, but I think I'll
hold on to the responses to Searle and the Dreyfuses until
I return from the two-week trip to Europe I'm starting on
Sunday. Searle's was the most fun, because it offers the
opportunity to respond to him with the same vigor with
which he treats those with whose opinions he disagrees.
I'm sending you the response to Winograd in the
hopes that it will induce him to overcome his laziness
and subject more of the material from his book to criticism.
Here is the response to the little Winograd wrote.
I would defend the "rationalistic orientation" against the
attack given in Flores's and Winograd's book, which I have read,
had Winograd bothered to present some of the attack. This defense,
however, would have to admit that some of the examples
in the book present problems for previous formalizations used
in AI. Their proper treatment requires a considerable elaboration
of the existing, though new, methods of formalized non-monotonic
reasoning. They may also require something along the lines of
formalized contexts, a subject I have recently been studying.
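The "formalized non-monotonic reasoning" McCarthy mentions here
is his circumscription method. As an editorial illustration
only (the birds-fly axiom below is the standard textbook
example, not anything from this exchange), circumscription
minimizes an "abnormality" predicate ab while letting flies
vary:

    -- Editorial sketch of circumscription, McCarthy-style.
    -- The common-sense axiom A: birds fly unless abnormal.
    def A {α : Type} (bird ab flies : α → Prop) : Prop :=
      ∀ x, bird x ∧ ¬ ab x → flies x

    -- Circumscription of ab in A, with flies allowed to vary:
    -- A holds, and ab is minimal among predicates for which
    -- A can still be made to hold.
    def Circ {α : Type} (bird ab flies : α → Prop) : Prop :=
      A bird ab flies ∧
      ∀ (ab' flies' : α → Prop),
        (A bird ab' flies' ∧ (∀ x, ab' x → ab x)) →
        (∀ x, ab x → ab' x)

With A as the only axiom, the minimal ab is empty, so every
bird provably flies; adding an axiom that makes penguins
abnormal enlarges the minimal ab and retracts the conclusion
for penguins, which is what makes the reasoning non-monotonic.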
I especially like the question about whether there is
water in the refrigerator, the issue of what knowledge of flies
may be ascribed to a frog's retina, and the Heidegger (or is
it Flores and Winograd) parable of hammering.
Oh well, too bad.
As for the stuff about considering the consequences of
one's work, one should indeed, but one must remember that
the scientist isn't the boss of society and can neither force
society to use the results of science nor prevent it from doing
so.